    Enhanced life-size holographic telepresence framework with real-time three-dimensional reconstruction for dynamic scene

    Three-dimensional (3D) reconstruction captures and reproduces a 3D representation of a real object or scene, and 3D telepresence lets a user feel the presence of a remote user who is transferred as a digital representation. Holographic displays are one alternative that removes the restriction of wearable hardware: they use light diffraction to present 3D images to viewers. However, capturing a life-size, full-body human in real time remains challenging because it involves a dynamic scene: the object to be reconstructed is constantly moving and changing shape, and requires multiple capture views. The volume of life-size data multiplies as more depth cameras are added, driving up computation time, and transferring high-volume 3D images over a network in real time can introduce lag and latency. Hence, the aim of this research is to enhance a life-size holographic telepresence framework with real-time 3D reconstruction for dynamic scenes. The work was carried out in three stages. In the first stage, real-time 3D reconstruction with the Marching Squares algorithm is combined with the data acquisition of dynamic scenes captured by a life-size setup of multiple Red Green Blue-Depth (RGB-D) cameras. In the second stage, the data acquired from the multiple RGB-D cameras are transmitted in real time with double compression for the life-size holographic telepresence. The third stage evaluates the life-size holographic telepresence framework integrated with the real-time 3D reconstruction of dynamic scenes. The findings show that enhancing the framework with real-time 3D reconstruction reduces computation time and improves the 3D representation of the remote user in a dynamic scene, while double compression keeps the life-size 3D representation smooth and minimizes delay and latency during frame synchronization in remote communication.
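
    The abstract does not detail the double-compression scheme, so the Python sketch below is only one hedged reading of it: a first pass quantizes point-cloud coordinates, a second deflates the packed bytes, and each frame is length-prefixed before going over TCP. The frame layout, the millimetre quantization, and the grab_frames helper are assumptions, not the authors' implementation.

        import socket
        import struct
        import zlib

        import numpy as np

        def compress_frame(points: np.ndarray, colors: np.ndarray) -> bytes:
            # Pass 1: quantize float32 metre coordinates to int16 millimetres.
            quantized = np.round(points * 1000.0).astype(np.int16)
            payload = quantized.tobytes() + colors.astype(np.uint8).tobytes()
            # Pass 2: deflate the packed bytes (the "double" compression).
            return zlib.compress(payload, 6)

        def send_frame(sock: socket.socket, blob: bytes) -> None:
            # Length-prefix each compressed frame so the receiver can split
            # the TCP byte stream back into whole point-cloud frames.
            sock.sendall(struct.pack("!I", len(blob)) + blob)

        # Hypothetical capture loop over multiple RGB-D cameras:
        # for points, colors in grab_frames(cameras):
        #     send_frame(server_sock, compress_frame(points, colors))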

    Transfer Learning-Driven Hourly PM2.5 Prediction Based on a Modified Hybrid Deep Learning

    Haze is a major air pollution problem in China that not only hinders economic development but also harms people's health, and PM2.5 (fine particulate matter) is its primary cause; precise forecasting of PM2.5 concentration therefore supports timely haze prevention and control. Air quality data are high-dimensional, non-linear, and complex. In this paper, a modified hybrid deep learning model is proposed under a transfer learning framework to address air quality prediction when data are sparse. The research focuses on overcoming the inadequate feature extraction of existing studies and predicts PM2.5 concentration at multiple sites. For adaptive extraction of air pollutant characteristics, long short-term memory (LSTM) networks and multi-layer perceptrons capture the long-term dependence and nonlinear transformation of features, respectively, and the learned features can be shared by PM2.5 prediction tasks at multiple sites. Channel and spatial attention mechanisms are added to extract the key information in the target representation, and residual units are used throughout the network to increase its depth and improve prediction accuracy. Experimental results on a Beijing dataset from 2013 to 2017 and a Hengshui dataset from 2020 to 2022 show that, compared with classical deep learning models, hybrid deep learning models, and recent transfer learning approaches, the network achieves higher accuracy and better robustness, especially when predicting at sites with sparse data. Compared with the TL-LSTM and TL-CNN-LSTM models, the RMSE of TL-Modified decreased by 38%, 16.5%, and 25.6% at different sites, respectively.
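
    The abstract names the model's ingredients but not its exact architecture, so the PyTorch sketch below is an illustrative outline under assumed layer sizes: an LSTM branch for long-term dependence, an MLP branch for nonlinear feature transformation, a squeeze-and-excitation style channel-attention gate (the spatial attention branch is omitted for brevity), and a residual unit before the prediction head. Freezing the shared feature layers and fine-tuning the head is one plausible form of the transfer to sparse-data sites.

        import torch
        import torch.nn as nn

        class HybridPM25(nn.Module):
            # Illustrative hybrid model; all sizes are placeholders.
            def __init__(self, n_features: int, hidden: int = 64):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.mlp = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
                # Channel attention: squeeze-and-excitation style gate.
                self.attn = nn.Sequential(
                    nn.Linear(2 * hidden, hidden // 4), nn.ReLU(),
                    nn.Linear(hidden // 4, 2 * hidden), nn.Sigmoid())
                self.res_block = nn.Sequential(
                    nn.Linear(2 * hidden, 2 * hidden), nn.ReLU(),
                    nn.Linear(2 * hidden, 2 * hidden))
                self.head = nn.Linear(2 * hidden, 1)  # next-hour PM2.5

            def forward(self, x):          # x: (batch, time, features)
                seq, _ = self.lstm(x)      # long-term dependence
                h = torch.cat([seq[:, -1], self.mlp(x[:, -1])], dim=1)
                h = h * self.attn(h)       # re-weight feature channels
                h = h + self.res_block(h)  # residual unit deepens the network
                return self.head(h)

        # Transfer to a sparse-data site: freeze the shared feature
        # extractor and fine-tune only the attention/residual layers and head.
        model = HybridPM25(n_features=8)
        for p in model.lstm.parameters():
            p.requires_grad = False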

    Real-Time 3D Reconstruction Method for Holographic Telepresence

    This paper introduces a real-time 3D reconstruction of a human captured with a depth sensor and integrates it with a holographic telepresence application. Holographic projection is widely recognized as one of the most promising 3D display technologies and is expected to become more widely available in the near future. The technology can be deployed in various forms, including holographic prisms and the Z-Hologram, which this research uses to demonstrate initial results by displaying the reconstructed 3D representation of the user. A stable and inexpensive 3D data acquisition system remains an unsolved problem, and when multiple sensors are involved the data must be compressed and optimized before being sent to a server for telepresence. The paper therefore presents the processes of real-time 3D reconstruction: data acquisition, background removal, point cloud extraction, and surface generation, in which a marching cubes algorithm forms an isosurface from the set of points in the point cloud and texture mapping is then applied to the generated isosurface. Compression results are presented, and the results of the integration process after sending the data over the network are also discussed.
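
    As a rough illustration of the stages just listed, from acquisition to isosurface extraction, the Python sketch below back-projects a depth image with placeholder pinhole intrinsics, removes the background with a crude depth gate, voxelizes the remaining points, and extracts a surface with marching cubes via scikit-image. All intrinsics and thresholds are assumed values, not the paper's, and texture mapping is omitted.

        import numpy as np
        from skimage import measure

        def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
            # Back-project a depth image (metres) through assumed pinhole
            # intrinsics into a 3D point cloud.
            h, w = depth.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            x = (u - cx) * depth / fx
            y = (v - cy) * depth / fy
            pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
            z = depth.reshape(-1)
            return pts[(z > 0.3) & (z < 2.5)]  # crude background removal

        def reconstruct(points, res=128):
            # Voxelize the cloud into an occupancy volume, then extract an
            # isosurface with marching cubes (scikit-image).
            lo, hi = points.min(0), points.max(0)
            idx = ((points - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
            vol = np.zeros((res, res, res), dtype=np.float32)
            vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
            verts, faces, normals, _ = measure.marching_cubes(vol, level=0.5)
            return verts, faces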

    Fingertips interaction method in handheld augmented reality for 3D manipulation

    Augmented Reality (AR) technology enhances the real-world environment with virtual information in either two-dimensional (2D) or three-dimensional (3D) format, and it demands interaction techniques that are as intuitive as possible for users to accept. The issues addressed here are tracking inaccuracy and occlusion. Therefore, a prototype enabling a fingertip manipulation method in a handheld AR interface was developed. This paper explains how the user can interact directly with a virtual 3D object and manipulate it using fingertip-based gestures in handheld AR, allowing natural interaction with and manipulation of virtual 3D objects. The paper describes the development phase of the AR application and its fingertip-based method for manipulating 3D objects. A gesture-tracking device is attached to the handheld device to track the user's natural hand interaction. The result of this research is a new prototype of the Ancient Malacca AR application with a fingertip-based manipulation method for the handheld AR interface.
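
    The abstract does not specify how fingertip gestures map onto manipulations, so the sketch below is a hypothetical Python mapping in which a closed thumb-index pinch drags the virtual object and changing the pinch width while engaged rescales it; the class, its threshold, and its input format are all invented for illustration. Each frame, fingertip positions from the gesture tracker would be fed to update().

        import numpy as np

        class FingertipManipulator:
            # Hypothetical fingertip-to-manipulation mapping: a closed
            # thumb-index pinch drags the object, and changing the pinch
            # width while engaged rescales it.
            PINCH_ENGAGE = 0.05  # metres; assumed engagement distance

            def __init__(self, position, scale=1.0):
                self.position = np.asarray(position, dtype=float)
                self.scale = float(scale)
                self._prev_index = None
                self._prev_pinch = None

            def update(self, thumb_tip, index_tip):
                thumb = np.asarray(thumb_tip, dtype=float)
                index = np.asarray(index_tip, dtype=float)
                pinch = float(np.linalg.norm(thumb - index))
                if pinch < self.PINCH_ENGAGE:
                    if self._prev_index is not None:
                        # Translate by the fingertip's frame-to-frame motion.
                        self.position += index - self._prev_index
                    if self._prev_pinch:
                        # Rescale by the relative change in pinch width.
                        self.scale *= pinch / self._prev_pinch
                    self._prev_index, self._prev_pinch = index, pinch
                else:
                    self._prev_index = self._prev_pinch = None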